3 research outputs found

    A parallel implementation of 3D Zernike moment analysis

    Zernike polynomials are a well-known set of functions that find many applications in image and pattern characterization because they allow the construction of shape descriptors that are invariant under translations, rotations, and scale changes. The concepts behind them extend to higher-dimensional spaces, which also makes them suitable for describing volumetric data. They have been used less than their properties might suggest because of their high computational cost. We present a parallel implementation of 3D Zernike moment analysis, written in C with CUDA extensions, which makes it practical to employ Zernike descriptors in interactive applications, yielding a performance of several frames per second on voxel datasets about 200³ in size. In our contribution, we describe the challenges of implementing 3D Zernike analysis on a general-purpose GPU, including how to deal with the numerical inaccuracies that arise from the algorithm's high precision demands, and how to handle the high volume of input data so that it does not become a bottleneck for the system.
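    For context, a common way to obtain 3D Zernike moments is to first accumulate the geometric moments M_pqr of the voxel grid (mapped into the unit ball) and then combine them linearly into the Zernike coefficients. The plain-C sketch below illustrates only that accumulation stage under these assumptions; the identifiers (accumulate_moments, ORDER, DIM) are invented for the example, and the paper's actual implementation parallelizes this per-voxel work on the GPU with CUDA and takes far more care with floating-point precision than this sketch does.

    /*
     * Minimal sketch (not the paper's code): accumulate the geometric moments
     *   M_pqr = sum over voxels of  x^p * y^q * z^r * f(x,y,z)
     * for all p+q+r <= ORDER, with the grid mapped into [-1,1]^3.
     * 3D Zernike moments can then be obtained as fixed linear combinations
     * of these M_pqr.  The paper parallelizes this per-voxel work with CUDA
     * to keep datasets around 200^3 voxels interactive.
     */
    #include <stdio.h>
    #include <stdlib.h>

    #define ORDER 8     /* maximum moment order (arbitrary for the demo) */
    #define DIM   64    /* small grid so the demo runs quickly           */

    /* index of M_pqr inside a flat array */
    static int midx(int p, int q, int r) {
        return (p * (ORDER + 1) + q) * (ORDER + 1) + r;
    }

    static void accumulate_moments(const float *vox, int dim, double *M) {
        double scale = 2.0 / dim;                 /* map voxel centres into [-1,1] */
        for (int z = 0; z < dim; ++z)
        for (int y = 0; y < dim; ++y)
        for (int x = 0; x < dim; ++x) {
            double f = vox[(z * dim + y) * dim + x];
            if (f == 0.0) continue;               /* skip empty voxels */
            double cx = (x + 0.5) * scale - 1.0;
            double cy = (y + 0.5) * scale - 1.0;
            double cz = (z + 0.5) * scale - 1.0;
            double xp = 1.0;                      /* running powers of cx, cy, cz */
            for (int p = 0; p <= ORDER; ++p, xp *= cx) {
                double yq = 1.0;
                for (int q = 0; p + q <= ORDER; ++q, yq *= cy) {
                    double zr = 1.0;
                    for (int r = 0; p + q + r <= ORDER; ++r, zr *= cz)
                        M[midx(p, q, r)] += f * xp * yq * zr;
                }
            }
        }
    }

    int main(void) {
        float *vox = calloc((size_t)DIM * DIM * DIM, sizeof *vox);
        double M[(ORDER + 1) * (ORDER + 1) * (ORDER + 1)] = {0};

        /* toy input: a solid axis-aligned box in the middle of the grid */
        for (int z = DIM / 4; z < 3 * DIM / 4; ++z)
        for (int y = DIM / 4; y < 3 * DIM / 4; ++y)
        for (int x = DIM / 4; x < 3 * DIM / 4; ++x)
            vox[(z * DIM + y) * DIM + x] = 1.0f;

        accumulate_moments(vox, DIM, M);
        printf("M_000 = %g  M_200 = %g  M_020 = %g\n",
               M[midx(0, 0, 0)], M[midx(2, 0, 0)], M[midx(0, 2, 0)]);
        free(vox);
        return 0;
    }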

    Composition of texture atlases for 3D mesh multi-texturing

    We introduce an automatic technique for mapping onto a 3D triangle mesh, which approximates the shape of a real 3D object, a high-resolution texture synthesized from several pictures taken simultaneously by real cameras surrounding the object. We create a texture atlas by first unwrapping the 3D mesh into a set of 2D patches with no distortion (i.e., the angles and relative sizes of the 3D triangles are preserved in the atlas), and then mixing the color information from the input images in three further steps: step no. 2 packs the 2D patches so that the bounding canvas of the set is as small as possible; step no. 3 assigns at most one triangle to each canvas pixel; finally, in step no. 4, the color of each pixel is calculated as a smoothly varying weighted average of the corresponding pixels from several input photographs. Our method is especially well suited to creating realistic 3D models without the need for graphic artists to retouch the texture. Categories and Subject Descriptors (according to ACM CCS): Computer Graphics [I.3.7]: Three-Dimensional Graphics and Realism—Color, shading, shadowing, and texture.
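    To make step no. 4 concrete, the following sketch shows one plausible way to blend the candidate colors that several photographs project onto a single atlas texel, assuming that smoothly varying, non-negative per-camera weights (e.g., derived from viewing angle and distance to the patch border) have already been computed; the identifiers (blend_pixel, cam_color, cam_weight) are hypothetical and not taken from the paper.

    /*
     * Sketch of step no. 4 (not the paper's code): each atlas texel receives
     * a normalized weighted average of the colors that the visible input
     * photographs project onto it.  The weights are assumed to be precomputed
     * so that they vary smoothly across the surface.
     */
    #include <stdio.h>

    typedef struct { float r, g, b; } color_t;

    /* Blend n candidate colors for one atlas texel.
     * cam_color[i]  : color that camera i projects onto this texel
     * cam_weight[i] : smooth, non-negative weight of camera i (0 if occluded) */
    static color_t blend_pixel(const color_t *cam_color,
                               const float *cam_weight, int n) {
        color_t out = {0.0f, 0.0f, 0.0f};
        float total = 0.0f;
        for (int i = 0; i < n; ++i) {
            out.r += cam_weight[i] * cam_color[i].r;
            out.g += cam_weight[i] * cam_color[i].g;
            out.b += cam_weight[i] * cam_color[i].b;
            total += cam_weight[i];
        }
        if (total > 0.0f) {       /* normalize; leave black if no camera sees it */
            out.r /= total;
            out.g /= total;
            out.b /= total;
        }
        return out;
    }

    int main(void) {
        /* toy example: three cameras see the same texel with different weights */
        color_t samples[3] = {{0.9f, 0.2f, 0.2f}, {0.8f, 0.3f, 0.2f}, {0.7f, 0.2f, 0.3f}};
        float   weights[3] = {0.6f, 0.3f, 0.1f};
        color_t c = blend_pixel(samples, weights, 3);
        printf("blended texel: %.3f %.3f %.3f\n", c.r, c.g, c.b);
        return 0;
    }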

    Face Lift Surgery for Reconstructed Virtual Humans

    We introduce an innovative, semi-automatic method to transform low-resolution facial meshes into high-definition ones, based on tailoring a generic, neutral human head model, designed by an artist, to fit the facial features of a specific person. To determine these facial features, we select a set of "control points" (corners of eyes, lips, etc.) in at least two photographs of the subject's face. The neutral head mesh is then automatically reshaped according to the relation between the control points in the original subject's mesh, through a set of transformation pyramids. The last step consists of merging both meshes and filling the gaps that appear in the previous process. This algorithm avoids the use of expensive and complicated technologies for obtaining depth maps, which would also need to be meshed afterwards.
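    As a rough illustration of control-point-driven reshaping, the sketch below uses a simple inverse-distance-weighted interpolation of the control-point displacements as a generic stand-in for the paper's transformation pyramids, which are not reproduced here; all identifiers (warp_vertices, ctrl_src, ctrl_dst) are hypothetical.

    /*
     * Sketch only (not the paper's transformation pyramids): reshape a generic
     * head mesh so that its control points move toward the control points
     * measured on the subject.  Every other vertex receives an inverse-distance
     * weighted average of the control-point displacements, so vertices near a
     * control point follow it closely.
     */
    #include <stdio.h>

    typedef struct { float x, y, z; } vec3;

    static float dist2(vec3 a, vec3 b) {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    /* ctrl_src: control points on the neutral head mesh
     * ctrl_dst: matching control points recovered from the subject's photos */
    static void warp_vertices(vec3 *verts, int nverts,
                              const vec3 *ctrl_src, const vec3 *ctrl_dst, int nctrl) {
        for (int v = 0; v < nverts; ++v) {
            vec3 disp = {0.0f, 0.0f, 0.0f};
            float wsum = 0.0f;
            for (int c = 0; c < nctrl; ++c) {
                float w = 1.0f / (dist2(verts[v], ctrl_src[c]) + 1e-6f);
                disp.x += w * (ctrl_dst[c].x - ctrl_src[c].x);
                disp.y += w * (ctrl_dst[c].y - ctrl_src[c].y);
                disp.z += w * (ctrl_dst[c].z - ctrl_src[c].z);
                wsum   += w;
            }
            verts[v].x += disp.x / wsum;
            verts[v].y += disp.y / wsum;
            verts[v].z += disp.z / wsum;
        }
    }

    int main(void) {
        /* toy mesh: three vertices, two control points pulled slightly forward */
        vec3 verts[3]    = {{0, 0, 0}, {0.1f, 0, 0}, {0, 0.1f, 0}};
        vec3 ctrl_src[2] = {{0, 0, 0}, {0.1f, 0, 0}};
        vec3 ctrl_dst[2] = {{0, 0, 0.02f}, {0.1f, 0, 0.01f}};
        warp_vertices(verts, 3, ctrl_src, ctrl_dst, 2);
        for (int i = 0; i < 3; ++i)
            printf("v%d = (%.3f, %.3f, %.3f)\n", i, verts[i].x, verts[i].y, verts[i].z);
        return 0;
    }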